basearts - Beginning Vocabulary

DIGITAL IMAGES are electronic snapshots taken of a scene or scanned from documents, such as photographs, manuscripts, printed texts, and artwork. The digital image is sampled and mapped as a grid of dots or picture elements (pixels). Each pixel is assigned a tonal value (black, white, shades of gray, or color), which is represented in binary code (zeros and ones). The binary digits ("bits") for each pixel are stored in a sequence by a computer and often reduced to a mathematical representation (compressed). The bits are then interpreted and read by the computer to produce an analog version for display or printing.

Pixel values (figure): as shown in the bitonal example image, each pixel is assigned a tonal value, in this example 0 for black and 1 for white.
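To make the pixel-grid idea concrete, here is a minimal sketch (plain Python, illustrative only) of a tiny bitonal image stored as rows of 0s and 1s, using the same convention as the caption above:

# A tiny 4 x 6 bitonal image: each pixel is one bit (0 = black, 1 = white).
bitonal_image = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
]

# Render the grid as text: "#" for black pixels, "." for white pixels.
for row in bitonal_image:
    print("".join("#" if pixel == 0 else "." for pixel in row))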

RESOLUTION is the ability to distinguish fine spatial detail. The spatial frequency at which a digital image is sampled (the sampling frequency) is often a good indicator of resolution. This is why dots-per-inch (dpi) or pixels-per-inch (ppi) are common and synonymous terms used to express resolution for digital images. Generally, but within limits, increasing the sampling frequency also helps to increase resolution.

PIXEL DIMENSIONS are the horizontal and vertical measurements of an image expressed in pixels. The pixel dimensions of a scan may be determined by multiplying both the width and the height of the document by the dpi. A digital camera also has pixel dimensions, expressed as the number of pixels horizontally and vertically that define its resolution (e.g., 2,048 by 3,072). To calculate the dpi achieved, divide a pixel dimension by the corresponding physical dimension of the document. An 8" x 10" document scanned at 300 dpi has pixel dimensions of 2,400 pixels (8" x 300 dpi) by 3,000 pixels (10" x 300 dpi).
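A short worked sketch of this arithmetic, using the 8" x 10" document at 300 dpi from the example above (the function names are illustrative only):

def pixel_dimensions(width_in, height_in, dpi):
    # Pixel dimensions = physical dimensions (inches) x dpi.
    return round(width_in * dpi), round(height_in * dpi)

def achieved_dpi(pixel_dimension, document_dimension_in):
    # dpi achieved = pixel dimension / corresponding document dimension.
    return pixel_dimension / document_dimension_in

print(pixel_dimensions(8, 10, 300))   # (2400, 3000)
print(achieved_dpi(2400, 8))          # 300.0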

BIT DEPTH is determined by the number of bits used to define each pixel. The greater the bit depth, the greater the number of tones (grayscale or color) that can be represented. Digital images may be produced in black and white (bitonal), grayscale, or color.

A bitonal image is represented by pixels consisting of 1 bit each, which can represent two tones (typically black and white), using the values 0 for black and 1 for white or vice versa. A grayscale image is composed of pixels represented by multiple bits of information, typically ranging from 2 to 8 bits or more. Example: in a 2-bit image, there are four possible combinations: 00, 01, 10, and 11. If "00" represents black and "11" represents white, then "01" equals dark gray and "10" equals light gray. The bit depth is two, but the number of tones that can be represented is 2^2, or 4. At 8 bits, 256 (2^8) different tones can be assigned to each pixel.

A color image is typically represented by a bit depth ranging from 8 to 24 or higher. With a 24-bit image, the bits are often divided into three groupings: 8 for red, 8 for green, and 8 for blue. Combinations of those bits are used to represent other colors. A 24-bit image offers 16.7 million (2^24) color values. Increasingly, scanners capture 10 bits or more per color channel and often output 8 bits per channel to compensate for "noise" in the scanner and to present an image that more closely mimics human perception.

Binary calculations for the number of tones represented by common bit depths:
1 bit (2^1) = 2 tones
2 bits (2^2) = 4 tones
3 bits (2^3) = 8 tones
4 bits (2^4) = 16 tones
8 bits (2^8) = 256 tones
16 bits (2^16) = 65,536 tones
24 bits (2^24) = 16.7 million tones
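The tone counts in the list above follow directly from raising 2 to the bit depth; a quick sketch:

# Number of distinct tones = 2 ** bit_depth
for bits in (1, 2, 3, 4, 8, 16, 24):
    print(f"{bits:2d} bits -> {2 ** bits:,} tones")
# 24 bits -> 16,777,216 tones (the "16.7 million" figure above)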

DYNAMIC RANGE is the range of tonal difference between the lightest light and darkest dark of an image. The higher the dynamic range, the more potential shades can be represented, although the dynamic range does not automatically correlate to the number of tones reproduced. For instance, high-contrast microfilm exhibits a broad dynamic range, but renders few tones. Dynamic range also describes a digital system's ability to reproduce tonal information. This capability is most important for continuous-tone documents that exhibit smoothly varying tones, and for photographs it may be the single most important aspect of image quality.

FILE SIZE is calculated by multiplying the surface area of a document (height x width) to be scanned by the bit depth and the dpi^2. Because image file size is represented in bytes, which are made up of 8 bits, divide this figure by 8.

Formula 1 for file size:
File Size = (height x width x bit depth x dpi^2) / 8

If the pixel dimensions are given, multiply them by each other and by the bit depth to determine the number of bits in an image file. For instance, if a 24-bit image is captured with a digital camera with pixel dimensions of 2,048 x 3,072, then the file size equals (2,048 x 3,072 x 24) / 8, or 18,874,368 bytes.

Formula 2 for file size:
File Size = (pixel width x pixel height x bit depth) / 8

File size naming convention: because digital images often result in very large files, the number of bytes is usually represented in increments of 2^10 (1,024) or more:
1 Kilobyte (KB) = 1,024 bytes
1 Megabyte (MB) = 1,024 KB
1 Gigabyte (GB) = 1,024 MB
1 Terabyte (TB) = 1,024 GB
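Both formulas can be checked with a few lines of arithmetic (uncompressed size only; the function names are illustrative):

def file_size_from_document(height_in, width_in, bit_depth, dpi):
    # Formula 1: (height x width x bit depth x dpi^2) / 8, in bytes.
    return height_in * width_in * bit_depth * dpi ** 2 / 8

def file_size_from_pixels(pixel_width, pixel_height, bit_depth):
    # Formula 2: (pixel width x pixel height x bit depth) / 8, in bytes.
    return pixel_width * pixel_height * bit_depth / 8

print(file_size_from_pixels(2048, 3072, 24))                 # 18874368.0 bytes (the camera example)
print(file_size_from_pixels(2048, 3072, 24) / 1024 / 1024)   # ~18 MB
print(file_size_from_document(10, 8, 1, 300))                # 900000.0 bytes for a bitonal 8" x 10" scan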

COMPRESSION is used to reduce image file size for storage, processing, and transmission. The file size for digital images can be quite large, taxing the computing and networking capabilities of many systems. All compression techniques abbreviate the string of binary code in an uncompressed image into a form of mathematical shorthand, based on complex algorithms. There are standard and proprietary compression techniques available. In general it is better to use a standard, broadly supported technique than a proprietary one that may offer more efficient compression and/or better quality but may not lend itself to long-term use or digital preservation strategies. There is considerable debate in the library and archival community over the use of compression in master image files.

Compression schemes can be further characterized as either lossless or lossy. Lossless schemes, such as ITU-T.6, abbreviate the binary code without discarding any information, so that when the image is "decompressed" it is bit-for-bit identical to the original. Lossy schemes, such as JPEG, average or discard the least significant information, based on an understanding of visual perception. Even so, it may be extremely difficult to detect the effects of lossy compression, and such an image may be considered "visually lossless." Lossless compression is most often used with bitonal scanning of textual material. Lossy compression is typically used with tonal images, and in particular continuous-tone images, where merely abbreviating the information will not result in any appreciable file savings.
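As an illustration only (run-length encoding is not one of the schemes named above, but it is a simple lossless technique), the sketch below abbreviates runs of identical bits and then restores the original exactly:

def rle_encode(bits):
    # Lossless run-length encoding: store (value, run length) pairs.
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    return [b for b, count in runs for _ in range(count)]

row = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1]      # one row of a bitonal image
encoded = rle_encode(row)                  # [[1, 4], [0, 2], [1, 4]]
assert rle_decode(encoded) == row          # decompression is bit-for-bit identical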

FILE FORMATS consist of both the bits that comprise the image and header information on how to read and interpret the file. File formats vary in terms of resolution, bit-depth, color capabilities, and support for compression and metadata.


White Balance - http://en.wikipedia.org/wiki/White_balance

In photography and image processing, color balance is the global adjustment of the intensities of the colors (typically red, green, and blue primary colors). An important goal of this adjustment is to render specific colors – particularly neutral colors – correctly; hence, the general method is sometimes called gray balance, neutral balance, or white balance. Color balance changes the overall mixture of colors in an image and is used for color correction; generalized versions of color balance are used to get colors other than neutrals to also appear correct or pleasing.
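A minimal sketch of one common gray-balance approach, the "gray world" assumption (this is one generic method, not the only one; it assumes an RGB NumPy array with values 0-255):

import numpy as np

def gray_world_balance(image):
    # Scale each channel so its mean matches the overall mean,
    # pushing neutral (gray) areas toward equal R, G, and B.
    image = image.astype(np.float64)
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(image * gains, 0, 255).astype(np.uint8)

# Example: a warm-tinted patch (too much red) is pulled back toward neutral gray.
patch = np.full((2, 2, 3), (200, 150, 120), dtype=np.uint8)
print(gray_world_balance(patch)[0, 0])   # roughly equal R, G, B values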

Color temperature - http://en.wikipedia.org/wiki/Color_temperature

In digital photography, color temperature is sometimes used interchangeably with white balance, which allows a remapping of color values to simulate variations in ambient color temperature. Most digital cameras and RAW image software provide presets simulating specific ambient values (e.g., sunny, cloudy, tungsten), while others allow explicit entry of white balance values in Kelvin. These settings vary color values along the blue–yellow axis, while some software includes additional controls (sometimes labeled tint) for the magenta–green axis.
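A small sketch of the preset idea; the Kelvin values below are typical approximations rather than standardized figures, and the dictionary and function are illustrative only:

# Approximate ambient color temperatures often associated with white-balance presets.
WHITE_BALANCE_PRESETS_K = {
    "tungsten": 3200,   # incandescent indoor light (reddish)
    "sunny": 5500,      # midday daylight
    "cloudy": 6500,     # overcast sky (bluish)
}

def preset_temperature(preset, explicit_kelvin=None):
    # Return an explicit Kelvin value if given, otherwise look up the preset.
    return explicit_kelvin if explicit_kelvin is not None else WHITE_BALANCE_PRESETS_K[preset]

print(preset_temperature("cloudy"))        # 6500
print(preset_temperature("sunny", 5200))   # explicit Kelvin entry wins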

Kelvin - http://en.wikipedia.org/wiki/Kelvin

The kelvin is often used as a measure of the color temperature of light sources. Color temperature is based upon the principle that a black body radiator emits light whose color depends on the temperature of the radiator. Black bodies with temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish. Color temperature is important in the fields of image projection and photography, where a color temperature of approximately 5600 K is required to match "daylight" film emulsions. In astronomy, the stellar classification of stars and their place on the Hertzsprung-Russell diagram are based, in part, upon their surface temperature, known as effective temperature. The photosphere of the Sun, for instance, has an effective temperature of 5778 K.


NOISE - http://en.wikipedia.org/wiki/Image_noise

In low light, correct exposure requires the use of long shutter speeds, higher gain (ISO sensitivity), or both. On most cameras, longer shutter speeds lead to increased salt-and-pepper noise due to photodiode leakage currents. At the cost of a doubling of read noise variance (41% increase in read noise standard deviation), this salt-and-pepper noise can be mostly eliminated by dark frame subtraction. Banding noise, similar to shadow noise, can be introduced through brightening shadows or through color-balance processing[14].
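A minimal sketch of dark-frame subtraction as described above (assumes NumPy arrays; in practice the dark frame is captured with the same exposure time and ISO as the light frame, with the lens cap on):

import numpy as np

def subtract_dark_frame(light_frame, dark_frame):
    # Remove fixed-pattern (hot-pixel / salt-and-pepper) noise by subtracting
    # a dark frame captured with the same exposure settings.
    corrected = light_frame.astype(np.int32) - dark_frame.astype(np.int32)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Toy example: a hot pixel present in both frames cancels out.
light = np.array([[12, 12], [12, 250]], dtype=np.uint8)   # 250 = hot pixel + scene
dark  = np.array([[ 2,  2], [ 2, 240]], dtype=np.uint8)   # same hot pixel, lens cap on
print(subtract_dark_frame(light, dark))                   # uniform result, hot pixel removed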

The relative effects of both read noise and shot noise increase as the exposure is reduced, corresponding to increased ISO sensitivity, since fewer photons are counted (shot noise) and since more amplification of the signal is necessary.
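A rough model of why relative noise grows at reduced exposure: with N photons counted, shot noise scales as the square root of N, so the signal-to-noise ratio falls as N falls, while read noise adds a fixed per-read contribution. The numbers below are purely illustrative:

import math

def snr(photons, read_noise):
    # Signal-to-noise ratio with Poisson shot noise plus a fixed read-noise term.
    return photons / math.sqrt(photons + read_noise ** 2)

for n in (10_000, 1_000, 100):   # fewer photons ~ shorter exposure / higher ISO
    print(f"{n:6d} photons -> SNR {snr(n, read_noise=5):.1f}")
# SNR drops sharply as fewer photons are counted.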

Noise problems with digital cameras

Figure: the first image was taken with an exposure time of more than 10 seconds in low light; the comparison image was taken with adequate lighting and a 0.1-second exposure.